Federated learning is a distributed machine learning paradigm that preserves user privacy by enabling multiple clients to jointly train a shared global model without transmitting their local data. However, the frequent exchange of model parameters between numerous clients and the server incurs heavy network delay and strains limited bandwidth. To address this, we propose an efficient federated learning algorithm that applies sparse ternary compression based on layer variation classification (LVC). First, we use layer variation as a metric to assess the significance of each layer of the model parameters; after client training, the layers are categorized into levels according to their variation and a sensitivity analysis. Then, during both upstream and downstream transmission of model parameters, we assign each level its own sparsification and ternary quantization ratios, maximizing compression efficiency while preserving crucial parameters. Finally, on the server side, a majority-layer aggregation strategy further reduces the communication cost. Experimental results on image classification tasks with the MNIST and Fashion-MNIST datasets demonstrate that the proposed LVC algorithm achieves high accuracy at minimal communication cost, striking an effective balance between communication efficiency and accuracy.
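
The abstract describes the pipeline only at a high level; the following Python/NumPy sketch illustrates one plausible reading of the client-side steps, namely ranking layers by variation and compressing each layer with a level-specific sparse ternary scheme. The concrete variation metric (mean absolute change between rounds), the three-level split, and the ratios 0.25/0.05/0.01 are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def layer_variation(prev, curr):
    """Mean absolute change of a layer's weights between rounds
    (an assumed form of the paper's layer-variation metric)."""
    return np.mean(np.abs(curr - prev))

def sparse_ternary_compress(delta, sparsity):
    """Keep the top-`sparsity` fraction of entries by magnitude and
    ternarize them to {-mu, 0, +mu}, where mu is the mean magnitude
    of the kept entries (standard sparse ternary compression)."""
    flat = delta.ravel()
    k = max(1, int(sparsity * flat.size))
    idx = np.argpartition(np.abs(flat), -k)[-k:]  # indices of the k largest magnitudes
    mu = np.mean(np.abs(flat[idx]))
    out = np.zeros_like(flat)
    out[idx] = mu * np.sign(flat[idx])
    return out.reshape(delta.shape)

def compress_update(prev_params, curr_params, level_ratios=(0.25, 0.05, 0.01)):
    """Rank layers by variation, split them into as many levels as there
    are ratios (most-varying layers get the least aggressive ratio), and
    compress each layer's update with its level's sparsity ratio."""
    names = list(curr_params)
    scores = {n: layer_variation(prev_params[n], curr_params[n]) for n in names}
    ranked = sorted(names, key=lambda n: scores[n], reverse=True)
    per_level = max(1, len(ranked) // len(level_ratios))
    compressed = {}
    for rank, name in enumerate(ranked):
        level = min(rank // per_level, len(level_ratios) - 1)
        delta = curr_params[name] - prev_params[name]
        compressed[name] = sparse_ternary_compress(delta, level_ratios[level])
    return compressed
```

A compressed update of this form needs only one float (mu) plus the sign and position of each kept entry per layer, which is what makes the level-wise ratio assignment directly control the per-round communication cost.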